In this paper, we present a polarimetric image restoration approach that aims to recover the Stokes parameters and the degree of linear polarization from their degraded counterparts. The Stokes parameters and the degree of linear polarization are degraded by the scattering and attenuation present in partially occluded or turbid media, such as turbid water. The polarimetric image restoration, with corresponding Mueller matrix estimation, is performed using polarization-informed deep learning and 3D integral imaging. An unsupervised image-to-image translation (UNIT) framework is used to obtain clean Stokes parameters from the degraded ones. Additionally, a multi-output convolutional neural network (CNN) branch predicts the Mueller matrix estimate along with an estimate of the corresponding residue. The degree of linear polarization, together with the Mueller matrix estimate, provides information about the characteristics of the underlying transmission medium and the object under consideration. The approach has been evaluated under different environmentally degraded conditions, such as various levels of turbidity and partial occlusion. The 3D integral imaging reduces the effects of degradation in a turbid medium, and a performance comparison between 3D and 2D imaging in varying scene conditions is provided. Experimental results suggest that the proposed approach is promising under the scene degradations considered. To the best of our knowledge, this is the first report on polarization-informed deep learning in 3D imaging that recovers polarimetric information along with the corresponding Mueller matrix estimate in a degraded environment.
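The Stokes parameters and degree of linear polarization mentioned above follow standard polarimetric relations. As a minimal sketch, assuming a four-angle polarizer capture at 0°, 45°, 90°, and 135° (the abstract does not specify the capture scheme):

```python
import numpy as np

def stokes_and_dolp(i0, i45, i90, i135):
    """Compute the linear Stokes parameters and degree of linear
    polarization (DoLP) from four polarizer-angle intensity images.
    These are the standard textbook relations; the paper's actual
    acquisition pipeline may differ."""
    s0 = i0 + i90        # total intensity
    s1 = i0 - i90        # 0/90-degree linear preference
    s2 = i45 - i135      # 45/135-degree linear preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp

# Fully polarized light at 0 degrees: all intensity passes the 0-degree
# polarizer, none passes 90 degrees, and half passes 45/135 degrees.
s0, s1, s2, dolp = stokes_and_dolp(np.array([1.0]), np.array([0.5]),
                                   np.array([0.0]), np.array([0.5]))
```

For this fully polarized input, the DoLP evaluates to 1; an unpolarized input (equal intensity at all four angles) would give a DoLP of 0.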
-
It is generally assumed that oceanic effects, such as absorption, scattering, and turbulence, deteriorate underwater optical imaging and/or signal detection. In this paper, we present an interesting observation that slight turbidity may actually improve the performance of underwater optical imaging in the presence of occlusion. We have carried out simulations and optical experiments in underwater degraded environments to investigate this hypothesis. For simulation, the Monte Carlo method was used to analyze imaging performance under varying turbidity and occlusion conditions. Additionally, optical experiments were conducted in various turbid and partially occluded environments. We considered the effects of different parameters, such as turbidity level, severity of partial occlusion, number of photons, propagation distance, and imaging modality. Our simulation results suggest that, regardless of the variation of the imaging system and degradation parameters, slight turbidity may improve underwater imaging performance under occlusion. The optical experimental results agree with the simulations: slightly increasing the turbidity level may boost image quality in the scenarios we considered. To the best of our knowledge, this is the first report to theoretically analyze and experimentally validate the phenomenon that turbidity may improve underwater imaging performance in certain degraded environments.
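The Monte Carlo analysis can be illustrated with a toy photon-transport sketch. This is a simplified stand-in, assuming isotropic scattering, exponential free path lengths, and illustrative coefficients `mu_s` (scattering) and `mu_a` (absorption); the paper's simulation additionally models occlusion and the full system geometry:

```python
import numpy as np

def fraction_detected(mu_s, mu_a=0.01, z_det=10.0, n_photons=2000, seed=0):
    """Toy Monte Carlo photon transport: photons are launched toward a
    detector plane at z = z_det, take exponentially distributed steps,
    and scatter isotropically or are absorbed at each interaction.
    Returns the fraction of photons that reach the detector plane."""
    rng = np.random.default_rng(seed)
    detected = 0
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])   # launched toward detector
        for _ in range(100):                    # cap on scattering events
            step = rng.exponential(1.0 / (mu_s + mu_a))
            pos = pos + step * direction
            if pos[2] >= z_det:
                detected += 1
                break
            if rng.random() < mu_a / (mu_s + mu_a):
                break                           # photon absorbed
            # isotropic rescattering: draw a new random direction
            cos_t = 2 * rng.random() - 1
            phi = 2 * np.pi * rng.random()
            sin_t = np.sqrt(1 - cos_t**2)
            direction = np.array([sin_t * np.cos(phi),
                                  sin_t * np.sin(phi), cos_t])
    return detected / n_photons

frac_clear = fraction_detected(mu_s=0.05)   # nearly clear water
frac_turbid = fraction_detected(mu_s=1.0)   # strongly turbid water
```

With these toy parameters, strong turbidity sharply reduces the fraction of photons reaching the detector, as expected; the paper's finding concerns the subtler regime where slight turbidity combines with occlusion.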
-
In this paper, we propose a procedure for analyzing lensless single random phase encoding (SRPE) systems to assess their robustness to variations in image sensor pixel size as the input signal frequency is varied. We use wave propagation to estimate the maximum pixel size at which lensless SRPE intensity patterns can be captured such that a given input signal frequency is recorded accurately. Lensless SRPE systems are constructed by placing a diffuser in front of an image sensor so that the optical field coming from an object is modulated before its intensity signature is captured at the image sensor. Since diffuser surfaces contain very fine features, the captured intensity patterns always contain high spatial frequencies regardless of the input frequencies; hence, a conventional Nyquist-criterion-based treatment of this problem would not give a meaningful characterization. We propose a theoretical estimate of the upper limit on the image sensor pixel size such that variations in the input signal are adequately captured by the sensor pixels. A numerical simulation of lensless SRPE systems using angular spectrum propagation and mutual information verifies our theoretical analysis, with the simulated sampling criterion matching our proposed theoretical estimate very closely. We provide a closed-form estimate of the maximum sensor pixel size as a function of input frequency and system parameters, making it possible to optimize general-purpose SRPE systems. Our results show that lensless SRPE systems are much more robust to sensor pixel size than lens-based systems, which makes SRPE useful for exotic imagers with large pixel sizes. To the best of our knowledge, this is the first report to investigate sampling of lensless SRPE systems as a function of input image frequency and the physical parameters of the system in order to estimate the maximum image sensor pixel size.
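The angular spectrum propagation used in the simulation can be sketched as follows. This is a generic, band-limited implementation; the wavelength, sample pitch, and propagation distance below are illustrative values, not the paper's parameters:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the angular
    spectrum method. The transfer function exp(i*kz*z) is applied in the
    Fourier domain; evanescent components (where kz would be imaginary)
    are set to zero. `dx` is the spatial sample pitch."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fx)**2 - (wavelength * fy)**2
    mask = arg > 0                          # keep only propagating waves
    kz = np.zeros_like(arg)
    kz[mask] = 2 * np.pi / wavelength * np.sqrt(arg[mask])
    h = np.where(mask, np.exp(1j * kz * z), 0)
    return np.fft.ifft2(np.fft.fft2(field) * h)

# Propagate a Gaussian beam 1 mm at 500 nm with a 2 um sample pitch.
x = np.arange(256) - 128
f = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2 * 20.0**2)).astype(complex)
g = angular_spectrum_propagate(f, wavelength=0.5e-6, dx=2e-6, z=1e-3)
```

Because the transfer function has unit modulus over the propagating band, free-space propagation conserves the energy of a well-sampled field, which is a convenient sanity check on the implementation.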
-
Image restoration aims to recover a clean image from a noisy one and has long been a topic of interest for researchers in imaging, optical science, and computer vision. As imaging environments become more deteriorated, the problem becomes more challenging. Several computational approaches, ranging from statistical methods to deep learning, have been proposed over the years to tackle this problem. Deep learning-based approaches provide promising image restoration results, but they are purely data driven, and their requirement for large training datasets (paired or unpaired) may limit their utility for certain physical problems. Recently, physics-informed image restoration techniques have gained importance due to their ability to enhance performance, infer aspects of the degradation process, and quantify the uncertainty in the prediction results. In this paper, we propose a physics-informed deep learning approach with simultaneous parameter estimation using 3D integral imaging and a Bayesian neural network (BNN). An image-to-image mapping architecture is first pretrained to generate a clean image from the degraded image, and is then trained jointly with the Bayesian neural network for simultaneous parameter estimation. The network is trained on data simulated with the physical model rather than on actual degraded data. The proposed approach has been tested experimentally under degradations such as low illumination and partial occlusion, and the recovery results are promising despite training on a simulated dataset. We have tested the performance of the approach under varying illumination levels and have also compared it against the corresponding 2D imaging-based approach; the results suggest significant improvements over 2D imaging even when trained on similar datasets. The parameter estimation results further demonstrate the utility of the approach in estimating the degradation parameter in addition to restoring the image under the experimental conditions considered.
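Since the network is trained on data simulated with a physical model, a simple low-illumination degradation simulator gives the flavor of such training data. This is an illustrative model (an illumination gain followed by Poisson shot noise and Gaussian read noise); the paper's actual simulation model and parameters are not specified here:

```python
import numpy as np

def simulate_low_light(clean, gain=0.1, read_sigma=2.0, seed=0):
    """Simulate a low-illumination capture of a clean image in [0, 1]:
    scale by an illumination gain, convert to photon counts with Poisson
    shot noise, add Gaussian read noise, and renormalize. An illustrative
    degradation model, not the paper's exact one."""
    rng = np.random.default_rng(seed)
    photons = rng.poisson(np.clip(clean, 0, None) * gain * 255.0)
    noisy = photons + rng.normal(0.0, read_sigma, clean.shape)
    return np.clip(noisy / 255.0, 0.0, 1.0)

# A bright flat patch captured at 5% illumination becomes dark and noisy.
clean = np.full((64, 64), 0.8)
degraded = simulate_low_light(clean, gain=0.05)
```

Pairs of `clean` and `degraded` images generated this way can stand in for real captures during training, which is the practical appeal of simulation-based training noted in the abstract.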
-
In this paper, we assess the noise susceptibility of coherent macroscopic single random phase encoding (SRPE) lensless imaging by analyzing how much information is lost due to the presence of camera noise. We used numerical simulation to first obtain the noise-free point spread function (PSF) of a diffuser-based SRPE system. Afterwards, we generated a noisy PSF by introducing shot noise, read noise, and quantization noise as seen in a real-world camera. We then used various statistical measures to examine how the shared information content between the noise-free and noisy PSFs is affected as the camera noise becomes stronger. We ran identical simulations with the diffuser in the lensless SRPE imaging system replaced by lenses for comparison with lens-based imaging. Our results show that, under high camera noise, SRPE lensless imaging systems are better than lens-based imaging systems at retaining information between corresponding noisy and noiseless PSFs. We have also examined how physical parameters of the diffuser, such as feature size and feature height variation, affect the noise robustness of an SRPE system. To the best of our knowledge, this is the first report to investigate the noise robustness of SRPE systems as a function of diffuser parameters, and it paves the way for the use of lensless SRPE systems to improve imaging in the presence of image sensor noise.
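The camera noise model named above (shot, read, and quantization noise) can be sketched as follows, with a simple correlation score standing in for the statistical measures of shared information; all parameter values (full-well capacity, read-noise level, bit depth) are illustrative assumptions:

```python
import numpy as np

def add_camera_noise(psf, full_well=10000.0, read_sigma=5.0, bits=8, seed=0):
    """Apply the three noise sources to a normalized PSF: Poisson shot
    noise at the given full-well capacity, Gaussian read noise in
    electrons, and uniform quantization to the given bit depth."""
    rng = np.random.default_rng(seed)
    electrons = rng.poisson(psf / psf.max() * full_well)          # shot noise
    electrons = electrons + rng.normal(0, read_sigma, psf.shape)  # read noise
    levels = 2**bits - 1
    dn = np.round(np.clip(electrons / full_well, 0, 1) * levels)  # quantize
    return dn / levels

# Correlation between the noise-free and noisy PSFs as a crude proxy for
# the shared-information measures discussed in the abstract.
x = np.linspace(-3, 3, 128)
psf = np.exp(-(x[:, None]**2 + x[None, :]**2))
noisy = add_camera_noise(psf)
rho = np.corrcoef(psf.ravel(), noisy.ravel())[0, 1]
```

Sweeping `full_well` downward or `read_sigma` upward drives the correlation down, mimicking the "camera noise becomes stronger" axis of the study.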
-
Integral imaging has proven useful for three-dimensional (3D) object visualization in adverse environmental conditions such as partial occlusion and low light. This paper considers the problem of 3D object tracking. Two-dimensional (2D) object tracking within a scene is an active research area. Several recent algorithms use object detection methods to obtain 2D bounding boxes around objects of interest in each frame; then, one bounding box can be selected for each object of interest using motion prediction algorithms. Many of these algorithms rely on images obtained with traditional 2D imaging systems. A growing literature demonstrates the advantage of using 3D integral imaging instead of traditional 2D imaging for object detection and visualization in adverse environmental conditions, and integral imaging's depth-sectioning ability has also proven beneficial for these tasks. Integral imaging captures an object's depth in addition to its 2D spatial position in each frame. A recent study uses integral imaging for 3D reconstruction of the scene for object classification and achieves passive depth estimation from the mutual information between the object's bounding box in the 3D reconstructed scene and the 2D central perspective. We build on this method by using Bayesian optimization to track the object's depth with as few 3D reconstructions as possible. We study the performance of our approach on laboratory scenes with occluded objects moving in 3D and show that the proposed approach outperforms 2D object tracking. In our experimental setup, mutual information-based depth estimation with Bayesian optimization achieves depth tracking with as few as two 3D reconstructions per frame, which is the theoretical minimum number of 3D reconstructions required for depth estimation. To the best of our knowledge, this is the first report on 3D object tracking using the proposed approach.
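The mutual information score used for passive depth estimation can be sketched with a standard histogram-based estimator; the paper's exact implementation and bin settings may differ:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) between two images,
    usable as a similarity score when searching over reconstruction
    depths: MI = sum p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)      # marginal of a
    py = p.sum(axis=0, keepdims=True)      # marginal of b
    nz = p > 0                             # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)               # identical images
mi_rand = mutual_information(img, rng.random((64, 64)))  # unrelated images
```

In a depth search, the reconstruction depth maximizing this score against the 2D central perspective would be taken as the depth estimate, which is the quantity the Bayesian optimization above seeks with as few evaluations as possible.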
-
We propose a diffuser-based lensless underwater optical signal detection system. The system consists of a lensless one-dimensional (1D) camera array equipped with random phase modulators for signal acquisition and a one-dimensional integral imaging convolutional neural network (1DInImCNN) for signal classification. During acquisition, the encoded signal transmitted by a light-emitting diode passes through a turbid medium as well as partial occlusion. The 1D diffuser-based lensless camera array captures the transmitted information, and the captured pseudorandom patterns are then classified by the 1DInImCNN to output the desired signal. We compared our proposed underwater lensless optical signal detection system with an equivalent lens-based system in terms of detection performance and computational cost, and the results show that the former outperforms the latter. Moreover, we applied dimensionality reduction to the lensless patterns and studied the resulting theoretical computational cost and detection performance. The results show that the detection performance of the lensless system does not suffer appreciably, making lensless systems a strong candidate for low-cost compressive underwater optical imaging and signal detection.
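As an illustration of classifying captured 1D pseudorandom patterns, the sketch below uses normalized correlation against per-symbol templates as a simple matched-filter stand-in for the 1DInImCNN classifier; the templates, attenuation, and noise levels are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# One pseudorandom 1D pattern per transmitted symbol (hypothetical).
templates = rng.random((2, 256))

def detect(captured, templates):
    """Classify a captured 1D pattern by normalized correlation against
    each symbol's template and return the best-matching symbol index.
    A matched-filter stand-in for the learned 1DInImCNN classifier."""
    scores = [np.corrcoef(captured, t)[0, 1] for t in templates]
    return int(np.argmax(scores))

# Symbol 1 transmitted through attenuation plus additive sensor noise.
captured = 0.3 * templates[1] + 0.05 * rng.standard_normal(256)
symbol = detect(captured, templates)
```

The correlation score is invariant to the overall attenuation factor, which is why even a strongly attenuated pattern can still be matched to its template; the learned CNN in the paper handles the harder case of turbidity and occlusion.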
-
Image restoration and denoising have long been challenging problems in optics and computer vision, and there has been active research in the optics and imaging communities toward robust, data-efficient systems for image restoration tasks. Recently, physics-informed deep learning has received wide interest in scientific problems. In this paper, we introduce a three-dimensional integral imaging-based, physics-informed, unsupervised CycleGAN (cycle-consistent generative adversarial network) algorithm for underwater image descattering and recovery. The system consists of a forward and a backward pass. The base architecture consists of an encoder and a decoder: the encoder takes the clean image along with the depth map and the degradation parameters and produces the degraded image, while the decoder takes the degraded image generated by the encoder along with the depth map and produces the clean image along with the degradation parameters. To give the input degradation parameters physical significance with respect to a physical model of the degradation, we also incorporate the physical model into the loss function. The proposed model has been assessed on a dataset curated through underwater experiments at various levels of turbidity. In addition to recovering the original image from the degraded image, the proposed algorithm also helps to model the distribution from which the degraded images were sampled. Furthermore, the proposed three-dimensional integral imaging approach is compared with a traditional deep learning-based approach and a 2D imaging approach under turbid and partially occluded environments. The results suggest the proposed approach is promising, especially under the above experimental conditions.
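A physical degradation model of the kind incorporated into such a loss function can be illustrated with the common underwater image-formation equation I = J*exp(-beta*d) + B*(1 - exp(-beta*d)); whether the paper uses exactly this model is an assumption, and all parameter values below are illustrative:

```python
import numpy as np

def degrade(clean, depth, beta, backscatter):
    """Underwater image-formation model often used as a physics prior:
    direct signal attenuated by exp(-beta*d) plus backscattered veiling
    light B*(1 - exp(-beta*d))."""
    t = np.exp(-beta * depth)               # transmission map
    return clean * t + backscatter * (1.0 - t)

def physics_loss(degraded, clean_pred, depth, beta_pred, backscatter):
    """Consistency term: re-degrading the predicted clean image with the
    predicted parameters should reproduce the observed degraded image."""
    resim = degrade(clean_pred, depth, beta_pred, backscatter)
    return float(np.mean((resim - degraded)**2))

depth = np.full((32, 32), 2.0)              # meters (illustrative)
clean = np.full((32, 32), 0.9)
observed = degrade(clean, depth, beta=0.4, backscatter=0.2)

# Correct parameters give zero loss; a wrong attenuation coefficient
# leaves a residual, so the loss ties the parameter to the physics.
loss_true = physics_loss(observed, clean, depth, 0.4, 0.2)
loss_bad = physics_loss(observed, clean, depth, 1.0, 0.2)
```

Adding a term like `physics_loss` to the adversarial and cycle-consistency objectives is what lets the predicted degradation parameters carry physical meaning rather than being free latent variables.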